Abstract: Autonomous driving is a promising technology for improving transportation in our society. In this paper, an end-to-end approach for learning to steer a car autonomously from images is proposed. The approach maps an input image to a small number of key perception indicators that directly relate to the affordance of the road/traffic state for driving. This representation provides a compact yet complete, task-specific summary of the scene, enabling a simple controller to drive the car autonomously. Convolutional Neural Networks (CNNs) are trained to output steering-wheel angle commands from front-camera images centered on the road. For this demonstration, a deep CNN is trained on recordings of human driving, and the results show that the model can drive a car well in a very diverse set of virtual environments. The results also show that this approach generalizes well to real driving images.

Keywords: Image Preprocessing, Convolutional Neural Networks (CNN), Object Detection, Machine Learning, OpenCV.
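
As a minimal illustration of the kind of model described in the abstract (not the paper's exact architecture or training setup), the sketch below shows a small CNN that regresses a steering-wheel angle from a front-camera image and is fitted to recorded human steering angles with a mean-squared-error loss. The layer sizes, the assumed 66x200 RGB input resolution, the PyTorch framework, and the hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: a CNN mapping a front-camera image to one steering-angle value.
# Input resolution (3 x 66 x 200), layer widths, and learning rate are assumptions.
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 3 * 20, 100), nn.ReLU(),  # 3x20 feature map for 66x200 input
            nn.Linear(100, 1),                       # single output: steering-wheel angle
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# One training step: minimize MSE between predicted and recorded human angles.
model = SteeringCNN()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 66, 200)   # placeholder batch of camera frames
angles = torch.randn(8, 1)            # placeholder recorded steering angles

optimizer.zero_grad()
loss = criterion(model(images), angles)
loss.backward()
optimizer.step()
```

In practice the placeholder tensors would be replaced by camera frames and steering logs from human driving recordings, and the trained network's output would feed the vehicle's steering controller.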